32 research outputs found

    Design of decellularized valve substitutes: determination of residual detergents

    This study aims to determine, more accurately and with a direct method, the residual detergents in biological valves subjected to decellularization, in order to compare the effectiveness of four methods that use the following anionic detergents: DOC, SDS, COL, TDOC.

    Towards better understanding of gradient-based attribution methods for Deep Neural Networks

    Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective, and no exhaustive empirical comparison has been performed in the past. In this work, we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n, and test the gradient-based attribution methods alongside a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.
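    As a concrete illustration of one of the simplest gradient-based attribution methods, the following is a minimal sketch of gradient * input for a PyTorch classifier. The model, input shape, and target class are illustrative assumptions; this is not the paper's unified framework or the Sensitivity-n metric.

```python
import torch

def gradient_x_input(model, x, target_class):
    """Attribute the logit of `target_class` to input features via gradient * input.

    x is assumed to be a single input with a batch dimension, e.g. shape (1, ...).
    """
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]   # scalar score for the class of interest
    score.backward()                    # populates x.grad with d(score)/dx
    return (x.grad * x).detach()        # elementwise product: one attribution per input feature
```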

    An event-driven probabilistic model of sound source localization using cochlea spikes

    This work presents a probabilistic model that estimates the location of sound sources using the output spikes of a silicon cochlea such as the Dynamic Audio Sensor. Unlike previous work, which estimated the source locations directly from the interaural time differences (ITDs) extracted from the timing of the cochlea spikes, here the spikes are used to support a distribution model of the ITDs representing possible locations of sound sources. Results on noisy single-speaker recordings show average accuracies of approximately 80% in detecting the correct source locations and an estimation lag of less than 100 ms.
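    A minimal sketch of the ITD-evidence idea, under assumed inputs (arrays of left/right spike times in seconds and a fixed grid of candidate ITDs); it accumulates a simple normalized histogram rather than the paper's full probabilistic model.

```python
import numpy as np

def itd_evidence(left_spikes, right_spikes, candidate_itds, max_pair_gap=1e-3):
    """Accumulate evidence for each candidate ITD from left/right spike time differences."""
    counts = np.zeros(len(candidate_itds))
    for t_left in left_spikes:
        diffs = right_spikes - t_left                 # time differences to all right spikes
        diffs = diffs[np.abs(diffs) < max_pair_gap]   # keep only pairs in a plausible ITD range
        for d in diffs:
            counts[np.argmin(np.abs(candidate_itds - d))] += 1
    return counts / max(counts.sum(), 1)              # normalized distribution over ITDs

# Usage (hypothetical): the candidate ITD with the most mass indicates the likely source direction.
# itds = np.linspace(-8e-4, 8e-4, 33)
# p = itd_evidence(left, right, itds); best_itd = itds[np.argmax(p)]
```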

    Temporal clusters of age-related behavioral alterations captured in smartphone touchscreen interactions

    Cognitive and behavioral abilities alter across the adult life span. Smartphones engage various cognitive functions, and the corresponding touchscreen interactions may help resolve if and how behavioral dynamics are structured by aging. Here, in a sample spanning the adult lifespan (16 to 86 years, N = 598, accumulating 355 million interactions), we analyzed a range of interaction intervals, from a few milliseconds to a minute. We clustered the interactions according to their next inter-touch interval dynamics to discover age-related changes in the distinct temporal clusters. There were age-related behavioral losses in the clusters occupying short intervals (~100 ms, R2 ~ 0.8) but gains at long intervals (~4 s, R2 ~ 0.4). These correlates were independent of the self-reported years of experience on the phone or the choice of fingers used on the screen. Our approach revealed a sophisticated form of behavioral aging in which individuals simultaneously demonstrated accelerated aging in one behavioral cluster and deceleration in another. In contrast to these strong correlations, cognitive tests probing sensorimotor, working memory, and executive processes revealed rather weak age-related decline. Contrary to the common notion of a simple behavioral decline with age based on conventional cognitive tests, we show that real-world behavior does not simply decline and that the nature of aging varies systematically according to the underlying temporal dynamics. Of all the imaginable factors determining smartphone interactions in the real world, age-sensitive cognitive and behavioral processes predominantly dictate smartphone temporal dynamics.
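    As a rough sketch of how interactions can be grouped by their inter-touch intervals, the snippet below runs k-means on log-scaled intervals computed from assumed per-user touch timestamps; the paper's clustering of next-interval dynamics is more involved than this.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_intervals(timestamps, n_clusters=4):
    """Cluster inter-touch intervals (in seconds) on a log scale."""
    intervals = np.diff(np.sort(timestamps))                      # inter-touch intervals
    intervals = intervals[(intervals > 1e-3) & (intervals < 60)]  # a few ms up to a minute
    log_iti = np.log10(intervals).reshape(-1, 1)                  # intervals span orders of magnitude
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(log_iti)
    return intervals, labels
```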

    Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception

    Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of “neuro-steered” hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS), in which the information about the attended speech, as decoded from the subject’s brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a strong candidate for neuro-steered hearing-assistive devices.
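    One generic way to condition a mask-estimation network on a brain-decoded envelope is sketched below; the network shape, feature sizes, and conditioning scheme are illustrative assumptions, not the BISS architecture.

```python
import torch
import torch.nn as nn

class EnvelopeConditionedMasker(nn.Module):
    """Predicts a time-frequency mask for the attended talker from the mixture
    spectrogram concatenated with a brain-decoded speech envelope."""

    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_freq + 1, hidden, batch_first=True, bidirectional=True)
        self.mask = nn.Linear(2 * hidden, n_freq)

    def forward(self, mix_spec, decoded_env):
        # mix_spec: (batch, time, n_freq) magnitude spectrogram of the mixture
        # decoded_env: (batch, time) envelope decoded from EEG/iEEG
        x = torch.cat([mix_spec, decoded_env.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.mask(h)) * mix_spec  # masked spectrogram of the attended talker
```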

    A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding

    The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models depends on how the model parameters are estimated. A number of model estimation methods have been published, along with a variety of datasets. It is currently unclear whether any of these methods performs better than the others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
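    A minimal sketch of a backward model with ridge regularization, under assumed data shapes (EEG as time x channels, envelopes as 1-D arrays sampled at the same rate): the attended envelope is reconstructed from time-lagged EEG and the attended talker is chosen by correlation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of the EEG (time x channels) into (time x channels*n_lags)."""
    T, C = eeg.shape
    X = np.zeros((T, C * n_lags))
    for k in range(n_lags):
        X[k:, k * C:(k + 1) * C] = eeg[:T - k]
    return X

def decode_attention(eeg_train, env_train, eeg_test, env_a, env_b, n_lags=32, alpha=1e3):
    """Train a regularized backward model, then classify which of two envelopes was attended."""
    model = Ridge(alpha=alpha).fit(lag_matrix(eeg_train, n_lags), env_train)
    recon = model.predict(lag_matrix(eeg_test, n_lags))
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return "A" if r_a > r_b else "B"  # attended talker = higher reconstruction correlation
```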
